Multi-task learning (MTL) models have demonstrated impressive results in computer vision, natural language processing, and recommender systems. Even though many approaches have been proposed, how well these approaches balance different tasks on each parameter still remains unclear. In this paper, we propose to measure the task dominance degree of a parameter by the total updates of each task on this parameter. Specifically, we compute the total updates by the exponentially decaying Average of the squared Updates (AU) on a parameter from the corresponding task. Based on this novel metric, we observe that many parameters in existing MTL methods, especially those in the higher shared layers, are still dominated by one or several tasks. The dominance of AU is mainly due to the dominance of accumulative gradients from one or several tasks. Motivated by this, we propose a Task-wise Adaptive learning rate approach, AdaTask in short, to separate the \emph{accumulative gradients} and hence the learning rate of each task for each parameter in adaptive learning rate approaches (e.g., AdaGrad, RMSProp, and Adam). Comprehensive experiments on computer vision and recommender system MTL datasets demonstrate that AdaTask significantly improves the performance of the dominated tasks, resulting in SOTA average task-wise performance. Analysis on both synthetic and real-world datasets shows that AdaTask balances the parameters in every shared layer well.
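As a rough illustration of the idea (not the authors' implementation), an AdaGrad-style update can keep one squared-gradient accumulator per task, so each task's learning rate on a shared parameter is adapted by its own gradient history only:

```python
import numpy as np

def adatask_adagrad_step(param, task_grads, task_accum, lr=0.01, eps=1e-8):
    """One AdaTask-flavored AdaGrad update (a minimal sketch).

    Instead of folding the summed multi-task gradient into a single state,
    each task keeps its own accumulator, so a task with large gradients
    cannot shrink the effective learning rate of the other tasks.

    param:       parameter array, shape (d,)
    task_grads:  list of per-task gradients, each shape (d,)
    task_accum:  list of per-task squared-gradient accumulators, shape (d,)
    """
    for k, g in enumerate(task_grads):
        task_accum[k] += g * g                             # per-task accumulation
        param -= lr * g / (np.sqrt(task_accum[k]) + eps)   # per-task adaptive step
    return param, task_accum
```

The abstract's exponentially decaying average (as in RMSProp/Adam) would replace the plain sum with a moving average, still kept separately per task.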
Real-world text applications often involve composing a wide range of text control operations, such as editing text w.r.t. an attribute, manipulating keywords and structure, and generating new text with desired attributes. Prior work typically learns or finetunes a language model (LM) to perform individual operations or specific subsets of them. Recent research has studied combining operations in a plug-and-play manner, often with costly search or optimization in the complex sequence space. This paper proposes a new efficient approach for composable text operations in a compact text latent space. The low dimensionality and differentiability of the text latent vectors allow us to develop an efficient sampler based on ordinary differential equations (ODEs) given arbitrary plug-in operators (e.g., attribute classifiers). By connecting pretrained LMs (e.g., GPT2) to the latent space through efficient adaptation, we then decode the sampled vectors into the desired text sequences. The flexible approach permits diverse control operators (sentiment, tense, formality, keywords, etc.) acquired using any relevant data from different domains. Experiments show that composing those operators within our approach manages to generate or edit high-quality text, substantially improving over previous methods in terms of generation quality and efficiency.
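A minimal sketch of the composition idea, assuming each plug-in operator exposes a gradient field on the latent vector; the Euler loop and the `operator_grads` interface here are our simplification of the actual ODE-based sampler, and decoding back to text via the adapted LM is omitted:

```python
import numpy as np

def compose_and_sample(z0, operator_grads, step=0.1, n_steps=50):
    """Euler sketch of operator-guided sampling in a latent space.

    Each plug-in operator contributes a gradient field on the latent
    vector; composing operators amounts to summing their gradients at
    every integration step.
    """
    z = np.array(z0, dtype=float)
    for _ in range(n_steps):
        z += step * sum(g(z) for g in operator_grads)  # follow the composed field
    return z
```

Because composition happens in the low-dimensional latent space, adding another control operator only adds one more term to the summed field, with no search over discrete token sequences.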
Realizing general-purpose language intelligence has been a longstanding goal of natural language processing, where standard evaluation benchmarks play a fundamental and guiding role. We argue that for general-purpose language intelligence evaluation, the benchmark itself needs to be comprehensive and systematic. To this end, we propose CUGE, a Chinese Language Understanding and Generation Evaluation benchmark with the following features: (1) a hierarchical benchmark framework, where datasets are principally selected and organized into a language capability-task-dataset hierarchy; (2) a multi-level scoring strategy, where model performance is reported at different levels based on the hierarchical framework. To facilitate CUGE, we provide a public leaderboard that can be customized to support flexible model judging criteria. Evaluation results on representative pre-trained language models indicate ample room for improvement toward general-purpose language intelligence. CUGE is publicly available at cuge.baai.ac.cn.
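The multi-level scoring strategy can be illustrated by averaging up the hierarchy: datasets into tasks, tasks into capabilities, and capabilities into an overall score. This is a hypothetical formula for illustration only, not CUGE's official aggregation:

```python
def cuge_style_score(hierarchy):
    """Toy multi-level scoring over a capability -> task -> dataset tree.

    hierarchy: {capability: {task: {dataset: score}}}
    Returns (overall_score, per-capability scores).
    """
    def mean(xs):
        xs = list(xs)
        return sum(xs) / len(xs)

    capability_scores = {
        cap: mean(mean(datasets.values()) for datasets in tasks.values())
        for cap, tasks in hierarchy.items()
    }
    overall = mean(capability_scores.values())
    return overall, capability_scores
```

A hierarchy like this lets the leaderboard surface a single headline number while still exposing where a model is weak (e.g., strong understanding but weak generation).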
Generative commonsense reasoning requires machines to generate sentences describing everyday scenarios given several concepts, which has recently attracted much attention. However, existing models cannot perform as well as humans, since the sentences they produce are often implausible and grammatically incorrect. In this paper, inspired by the process of humans creating sentences, we propose a novel knowledge-enhanced commonsense generation framework, termed KGR^4, consisting of four stages: Retrieval, Retrospect, Refine, Rethink. Under this framework, we first perform retrieval to search for relevant sentences from an external corpus as prototypes. We then train a generator that edits or copies these prototypes to generate candidate sentences, whose potential errors are fixed by an autoencoder-based refiner. Finally, we select the output sentence from the candidate sentences produced by generators with different hyper-parameters. Experimental results and in-depth analysis on the CommonGen benchmark strongly demonstrate the effectiveness of our framework. In particular, KGR^4 obtains 33.56 SPICE points on the official leaderboard, outperforming the previously reported best result by 2.49 SPICE points and achieving state-of-the-art performance.
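The four-stage flow can be sketched as a small pipeline, with `generate`, `refine`, and `score` standing in for the trained generator, the autoencoder-based refiner, and the candidate selector (all hypothetical placeholders, not the paper's models):

```python
def kgr4_pipeline(concepts, corpus, generate, refine, score):
    """Toy sketch of the Retrieval -> Retrospect -> Refine -> Rethink flow.

    Retrieval:  find prototype sentences mentioning the given concepts.
    Retrospect: edit/copy each prototype into a candidate sentence.
    Refine:     fix potential errors in every candidate.
    Rethink:    pick the highest-scoring candidate as output.
    """
    prototypes = [s for s in corpus if any(c in s for c in concepts)]  # Retrieval
    candidates = [generate(p, concepts) for p in prototypes]           # Retrospect
    candidates = [refine(c) for c in candidates]                       # Refine
    return max(candidates, key=score)                                  # Rethink
```

In the paper, the Rethink stage selects among candidates from generators trained with different hyper-parameters; here a single scoring function plays that role.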
Restoring reasonable and realistic content for arbitrary missing regions in images is an important yet challenging task. Although recent image inpainting models have made significant progress in generating vivid visual details, they can still lead to texture blurring or structural distortions due to contextual ambiguity when dealing with more complex scenes. To address this issue, we propose the Semantic Pyramid Network (SPN), motivated by the idea that learning multi-scale semantic priors from specific pretext tasks can greatly benefit the recovery of locally missing content in images. SPN consists of two components. First, it distills semantic priors from a pretext model into a multi-scale feature pyramid, achieving a consistent understanding of global context and local structure. Within the prior learner, we present an optional module for variational inference to realize probabilistic image inpainting driven by diverse learned priors. The second component of SPN is a fully context-aware image generator, which adaptively and progressively refines low-level visual representations together with the (stochastic) prior pyramid. We train the prior learner and the image generator as a unified model without any post-processing. Our approach achieves state of the art on multiple datasets, including Places2, Paris StreetView, CelebA, and CelebA-HQ, under both deterministic and probabilistic inpainting setups.
Weakly supervised temporal action localization aims to learn instance-level action patterns from video-level labels, where a significant challenge is action-context confusion. To overcome this challenge, one recent work built an action-click supervision framework. It requires similar annotation costs but can steadily improve localization performance compared with conventional weakly supervised methods. In this paper, by revealing that the performance bottleneck of existing approaches mainly comes from background errors, we find that a stronger action localizer can be trained with labels on background video frames rather than on action frames. To this end, we convert action-click supervision to background-click supervision and develop a novel method called BackTAL. Specifically, BackTAL implements two-fold modeling on the background video frames, i.e., position modeling and feature modeling. In position modeling, we not only conduct supervised learning on the annotated video frames but also design a score separation module to enlarge the score differences between potential action frames and backgrounds. In feature modeling, we propose an affinity module to measure frame-specific similarities among neighboring frames and dynamically attend to informative neighbors when performing temporal convolution. Extensive experiments on three benchmarks demonstrate the high performance of the established BackTAL and the rationality of the proposed background-click supervision. The code is available at https://github.com/VividLe/BackTAL.
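A toy version of the score-separation idea, enlarging the gap between the mean class score of likely action frames and that of the clicked background frames (our simplification for illustration, not the paper's exact objective):

```python
import numpy as np

def score_separation_loss(scores, bg_idx, top_k=2):
    """Sketch of a score-separation objective.

    scores:  per-frame class activation scores for one video, shape (T,)
    bg_idx:  indices of frames annotated as background via clicks
    top_k:   number of top-scoring frames treated as potential action frames

    Minimizing the returned value pushes the top-k (potential action) mean
    score up and the annotated-background mean score down, widening the gap.
    """
    scores = np.asarray(scores, dtype=float)
    action_mean = np.sort(scores)[-top_k:].mean()   # potential action frames
    bg_mean = scores[bg_idx].mean()                 # clicked background frames
    return -(action_mean - bg_mean)                 # minimize => enlarge the gap
```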
Recent studies on T1-assisted MRI reconstruction for under-sampled images of other modalities have demonstrated the potential to further accelerate MRI acquisition of those modalities. Most state-of-the-art approaches achieve their improvements by developing network architectures for a fixed under-sampling pattern, without fully exploiting the complementary information between modalities. Although existing under-sampling pattern learning algorithms can be simply modified to let the fully-sampled T1-weighted MR image assist the pattern learning, no significant improvement on the reconstruction task can be achieved that way. To this end, we propose an iterative framework that optimizes the under-sampling pattern for MRI acquisition of another modality so that it complements the fully-sampled T1-weighted MR image at different under-sampling factors, while jointly optimizing the T1-assisted MRI reconstruction model. Specifically, our proposed method exploits the difference in latent information between the two modalities to determine the sampling patterns that maximize the assistive power of the T1-weighted MR image in improving MRI reconstruction. On a public dataset, we demonstrate the superior performance of our learned under-sampling patterns at an 8-fold under-sampling factor, compared with commonly used under-sampling patterns and state-of-the-art methods that jointly optimize the reconstruction network and the under-sampling pattern.
Current mainstream object detection methods for large aerial images usually divide large images into patches and then exhaustively detect the objects of interest on all patches, no matter whether there exist objects or not. This paradigm, although effective, is inefficient because the detectors have to go through all patches, severely hindering the inference speed. This paper presents an Objectness Activation Network (OAN) to help detectors focus on fewer patches but achieve more efficient inference and more accurate results, enabling a simple and effective solution to object detection in large images. In brief, OAN is a light fully-convolutional network for judging whether each patch contains objects or not, which can be easily integrated into many object detectors and jointly trained with them end-to-end. We extensively evaluate our OAN with five advanced detectors. Using OAN, all five detectors acquire more than 30.0% speed-up on three large-scale aerial image datasets, meanwhile with consistent accuracy improvements. On extremely large Gaofen-2 images (29200$\times$27620 pixels), our OAN improves the detection speed by 70.5%. Moreover, we extend our OAN to driving-scene object detection and 4K video object detection, boosting the detection speed by 112.1% and 75.0%, respectively, without sacrificing the accuracy. Code is available at https://github.com/Ranchosky/OAN.
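The inference pattern OAN enables can be sketched as follows, where `objectness_fn` and `detector_fn` are hypothetical stand-ins for the light objectness network and the heavy detector:

```python
import numpy as np

def detect_with_oan(patches, objectness_fn, detector_fn, thresh=0.5):
    """Sketch of OAN-style inference over the patches of a large image.

    The cheap objectness network scores every patch first; the expensive
    detector only runs on patches whose score clears the threshold, so most
    empty patches are skipped entirely.
    """
    results = {}
    for i, patch in enumerate(patches):
        if objectness_fn(patch) >= thresh:   # cheap pass over all patches
            results[i] = detector_fn(patch)  # expensive detector on few patches
    return results
```

The speed-up reported in the abstract comes precisely from this filtering: on sparse aerial imagery most patches fail the objectness check and never reach the detector.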
Graph Neural Networks (GNNs) are a family of neural models tailored for graph-structured data and have shown superior performance in learning representations for such data. However, training GNNs on large graphs remains challenging, and a promising direction is distributed GNN training, which partitions the input graph and distributes the workload across multiple machines. The key bottleneck of existing distributed GNN training frameworks is the cross-machine communication induced by the dependency on the graph data and the aggregation operator of GNNs. In this paper, we study the communication complexity of distributed GNN training and propose a simple lossless communication reduction method, termed the Aggregation before Communication (ABC) method. The ABC method exploits the permutation-invariant property of GNN layers and leads to a paradigm in which vertex-cut partitioning provably admits superior communication performance over the currently popular paradigm (edge-cut). In addition, we show that the new partition paradigm is particularly ideal for dynamic graphs, where it is infeasible to control the edge placement due to the unknown stochasticity of the graph-changing process.
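The key observation can be demonstrated with sum aggregation: because it is permutation-invariant, each machine can pre-aggregate the features of its locally held neighbors and communicate a single partial sum per target vertex, rather than shipping every raw neighbor feature. A sketch of the idea, not the framework's code:

```python
import numpy as np

def partial_aggregate(local_neighbor_feats):
    """Each machine sums its local neighbors' features before sending."""
    return np.sum(local_neighbor_feats, axis=0)

# Two machines hold disjoint neighbor sets of the same target vertex.
machine_a = np.array([[1.0, 2.0], [3.0, 4.0]])
machine_b = np.array([[5.0, 6.0]])

# Communicate one partial sum per machine, then combine at the target.
combined = partial_aggregate(machine_a) + partial_aggregate(machine_b)

# Identical to aggregating all raw neighbor features in one place.
full = np.sum(np.vstack([machine_a, machine_b]), axis=0)
```

Per target vertex, the communication cost drops from one feature vector per remote neighbor to one per machine, which is why vertex-cut partitioning pairs well with this scheme.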
Hashing has been widely researched to solve the large-scale approximate nearest neighbor search problem owing to its time and storage superiority. In recent years, a number of online hashing methods have emerged, which can update the hash functions to adapt to new streaming data and realize dynamic retrieval. However, existing online hashing methods are required to update the whole database with the latest hash functions when a query arrives, which leads to low retrieval efficiency as the stream data continuously grows. On the other hand, these methods ignore the supervision relationship among the examples, especially in the multi-label case. In this paper, we propose a novel Fast Online Hashing (FOH) method which only updates the binary codes of a small part of the database. To be specific, we first build a query pool in which the nearest neighbors of each central point are recorded. When a new query arrives, only the binary codes of the corresponding potential neighbors are updated. In addition, we create a similarity matrix which takes the multi-label supervision information into account and introduce a multi-label projection loss to further preserve the similarity among the multi-label data. The experimental results on two common benchmarks show that the proposed FOH achieves query times up to 6.28 seconds shorter than state-of-the-art baselines, with competitive retrieval accuracy.
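A sketch of the partial-update step, assuming a precomputed `query_pool` mapping each central point to its recorded potential neighbors and a `hash_fn` standing in for the latest learned hash function (both hypothetical interfaces):

```python
import numpy as np

def update_potential_neighbors(query, centers, query_pool, codes, hash_fn):
    """FOH-style partial update of the binary code table.

    Only the codes of the potential neighbors stored under the central
    point nearest to the query are refreshed, instead of re-hashing the
    whole database on every query.
    """
    nearest = int(np.argmin(np.linalg.norm(centers - query, axis=1)))
    for idx in query_pool[nearest]:    # potential neighbors of this center
        codes[idx] = hash_fn(idx)      # refresh only these codes
    return codes
```

The rest of the database keeps its stale codes, which is where the query-time savings over full-database online hashing come from.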